
Internet Manipulation: An Educational Resource on How Bots and Automation Shape Our Online Reality
(Context: The Dead Internet Files)
This resource explores the multifaceted phenomenon of internet manipulation, examining its methods, actors, and impacts. Within the framework of "The Dead Internet Files," we will pay particular attention to how automation, algorithms, and bots contribute to shaping online discourse, potentially creating an environment where simulated or directed activity can overshadow genuine human interaction.
What is Internet Manipulation?
Internet manipulation involves the strategic use of online digital technologies to influence perceptions and behaviors for specific outcomes. These outcomes can range across various domains, including commercial interests, social dynamics, military objectives, and political agendas.
Tools frequently employed in internet manipulation include:
- Algorithms: Automated processes that determine what content users see, often based on personalization but also potentially used to filter or prioritize information.
- Social Bots: Automated software programs designed to mimic human users on social media and online platforms, posting content, liking, sharing, and interacting (a minimal sketch of such automation follows this list).
- Automated Scripts: Code that performs repetitive tasks, such as generating fake accounts, mass messaging, or manipulating online polls.
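To make the bot and script categories concrete, here is a minimal sketch of the kind of automation involved. The endpoint, tokens, and talking points below are hypothetical placeholders (no real platform API is referenced); actual operations script the same pattern of posting and interaction at much larger scale:

```python
import time
import random
import requests

# Hypothetical endpoint and credentials -- placeholders, not a real platform API.
API_URL = "https://platform.example/api/v1/posts"
TOKENS = ["token-for-fake-account-1", "token-for-fake-account-2"]

# Scripted talking points designed to simulate organic agreement.
TALKING_POINTS = [
    "Everyone I know agrees with this.",
    "Finally someone said it!",
    "This is clearly the majority view.",
]

def post_as(token: str, text: str) -> None:
    """Publish one post while authenticated as a single fake account."""
    requests.post(
        API_URL,
        json={"text": text},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )

# Each fake account posts a scripted message with a random delay,
# imitating the irregular rhythm of a human user.
for token in TOKENS:
    post_as(token, random.choice(TALKING_POINTS))
    time.sleep(random.uniform(30, 300))  # jitter to evade naive rate-based detection
```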
Internet and social media manipulation are primary drivers of disinformation in the modern era, leveraging the central role digital platforms play in news consumption and communication.
When directed at political goals, manipulation can aim to:
- Steer public opinion in a particular direction.
- Increase polarization among citizens.
- Circulate conspiracy theories.
- Silence or marginalize political dissidents.
Manipulation can also be driven by profit, for example, to damage the reputation of competitors (corporate or political) or enhance one's own brand. Less commonly, the term can also refer to the selective enforcement of internet censorship or violations of net neutrality principles.
Computational Propaganda: A specific form of internet manipulation involving propaganda purposes, often utilizing data analysis and internet bots on social media platforms to spread targeted messages and create the illusion of widespread support or dissent.
Mechanisms and Effects of Internet Manipulation
Internet manipulation fundamentally seeks to alter user perceptions and, consequently, their actions. This goal is sometimes referred to as a form of "cognitive hacking."
Cognitive Hacking: A concept describing cyberattacks or digital manipulations aimed at changing human perception, beliefs, and behavior, rather than directly targeting technical systems.
Modern forms of cognitive hacking, such as fake news, disinformation attacks, and deepfakes (synthetic media where a person's likeness is replaced with someone else's), can subtly influence behavior in ways that are difficult for the average user to detect.
Several psychological and social factors are leveraged in online manipulation:
- Emotional Resonance: Content that evokes strong, high-arousal emotions (such as awe, anger, or anxiety) or carries hidden sexual undertones tends to go viral and spread quickly. Manipulators exploit this by crafting emotionally charged narratives, regardless of their factual basis.
- Simplicity and Narratives: Providing simple, easily digestible explanations for complex situations is a common manipulation tactic. These often gain traction and spread more rapidly than nuanced, complex analyses, especially in the absence of immediate, thorough investigations.
- Influence of Prior Ratings: Users' perception of online content is significantly influenced by how others appear to have rated or interacted with it.
Bandwagon/Snowball Voting: An observed phenomenon on platforms like Reddit where content that receives initial upvotes or downvotes tends to attract more votes in the same direction, creating a self-reinforcing positive or negative trajectory. Manipulators can exploit this by using automated bots to cast the initial votes, skewing the perception of content popularity (a toy simulation of this effect follows this list).
- Prevalence and the Mere-Exposure Effect: The mere act of being exposed to information, even if not fully read or processed critically, can increase its perceived familiarity, truthfulness, or importance. Fake news headlines and sound bites can have an effect simply through repeated exposure across a user's feed, even if the linked article isn't clicked. Manipulators can amplify specific points, views, or even simulate the apparent prevalence of people or opinions using automated accounts.
Mere-exposure effect: A psychological phenomenon where people tend to develop a preference for things merely because they are familiar with them. In online manipulation, repeated exposure to a false or biased message, often through automated amplification, can increase its perceived credibility.
- Targeted Messaging: Social media activities and other online data can be extensively analyzed to build detailed profiles of individuals' personalities, preferences, and potential behaviors. Techniques developed by researchers like Michal Kosinski allow for highly granular targeting. This enables manipulators to tailor disinformation or propaganda messages to resonate most effectively with a specific person's psychological predispositions, as seen in reported applications during political campaigns.
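The bandwagon dynamic defined above is easy to reproduce in a toy model. In the sketch below, every parameter (vote probabilities, herding bonus, user count) is an illustrative assumption; the point is only that a handful of seeded bot upvotes at the start can shift a post's final score:

```python
import random

def simulate_votes(seed_votes: int, n_users: int = 1000, rng=None) -> int:
    """Toy model: each user upvotes with a base probability that is
    nudged upward whenever the running score is already positive."""
    rng = rng or random.Random(42)         # fixed seed for a fair comparison
    score = seed_votes
    for _ in range(n_users):
        base_p = 0.05                      # organic chance of an upvote
        herd_bonus = 0.03 if score > 0 else 0.0  # the bandwagon nudge
        if rng.random() < base_p + herd_bonus:
            score += 1
        elif rng.random() < 0.05:          # independent chance of a downvote
            score -= 1
    return score

organic = simulate_votes(seed_votes=0)
botted = simulate_votes(seed_votes=5)      # a handful of bot upvotes at the start
print(f"organic final score: {organic}, bot-seeded final score: {botted}")
```

Because the herding bonus applies from the first moment the score is positive, the bot-seeded post compounds its early advantage over the run, which is the self-reinforcing trajectory described above.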
The Role of Algorithms, Echo Chambers, and Polarization
The sheer volume of content online necessitates algorithms to help users find relevant information. However, these same algorithms, particularly on social networking platforms and search engines, contribute significantly to internet manipulation and its downstream effects.
Algorithms personalize user feeds based on past interactions, expressed preferences, and inferred interests. While intended to improve user experience, this personalization can restrict exposure to differing viewpoints, inadvertently or intentionally creating:
Echo Chambers: Online spaces where individuals are primarily exposed to information and opinions that align with their own, reinforcing their existing beliefs and limiting exposure to alternative perspectives.
Filter Bubbles: A state of intellectual isolation that can result from algorithms selecting the content users see based on data about them (such as location, past click behavior, and search history), effectively filtering out information that conflicts with their viewpoints.
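A minimal sketch of the feedback loop behind these effects, under the simplifying assumption that a feed ranks items purely by overlap with topics the user has clicked before (real ranking systems are far more complex and proprietary):

```python
from collections import Counter

def rank_feed(items, click_history):
    """Rank candidate items by overlap with topics the user clicked before.
    Purely illustrative -- a stand-in for far richer production rankers."""
    topic_weights = Counter(click_history)   # more past clicks => more weight
    def score(item):
        return sum(topic_weights[t] for t in item["topics"])
    return sorted(items, key=score, reverse=True)

items = [
    {"title": "Party A rally draws crowds", "topics": ["party_a", "politics"]},
    {"title": "Party B policy explainer",   "topics": ["party_b", "politics"]},
    {"title": "Local sports results",       "topics": ["sports"]},
]

# A user who has only ever clicked Party A content...
history = ["party_a", "party_a", "politics"]
for item in rank_feed(items, history):
    print(item["title"])
# ...sees Party A content ranked first; clicking it adds more weight to the
# same topics, which is the feedback loop that narrows the feed over time.
```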
Filter bubbles can distort a user's perception of reality by giving the impression that a particular perspective is widely shared and accepted, even if it is a niche or manipulated view. The surprise expressed by many after events like the UK's Brexit referendum and the 2016 US Presidential election highlighted this potential disconnect between individual algorithmic realities and broader public opinion.
While research continues on the precise degree to which algorithms cause polarization versus merely amplify existing tendencies or user choices, it's clear that the algorithmic filtering process is a significant component of the modern online information landscape, one that can be exploited for manipulative purposes to control the range of opinions a user encounters.
Tools and Tactics in Practice
Internet manipulation employs a variety of tools and tactics to achieve its goals:
- Social Bots and Fake Accounts: Used to simulate followers, generate likes, share content, post comments, and participate in discussions, creating an artificial appearance of support or activity.
- Content Generation: Producing fake news articles, fabricated testimonials, manipulated images, or deepfake videos.
- Content Amplification: Using bots or coordinated networks of fake accounts to share, retweet, or upvote specific content to make it appear more popular or authoritative than it is.
- Content Suppression: Using bots to mass-report legitimate content or accounts to get them removed or downranked, or flooding platforms with irrelevant or abusive content to drown out opposing voices.
- False Attribution: Creating content and falsely attributing it to a different source or individual to damage their reputation or lend false credibility.
- False Flag Operations: Manipulative actions where content is presented as if it originated from a different entity than the true source.
False Flag Operation (Online Context): An act of internet manipulation where an activity (like posting content or launching an attack) is carried out by one party but made to appear as if it was done by another party, often to incriminate or discredit the perceived enemy.
- Credential Harvesting: Gathering credentials or compromising accounts to gain access to legitimate platforms or identities, sometimes exploiting unwitting individuals (such as journalists) to disseminate information or reach targets.
Actors Involved in Internet Manipulation
A wide range of actors utilize internet manipulation techniques:
- Government and Military Agencies: State actors are major players in internet manipulation, using it for intelligence gathering, propaganda, psychological operations ("Effects" operations), and undermining adversaries.
- Examples:
- The UK's GCHQ Joint Threat Research Intelligence Group (JTRIG) has been documented using "dirty tricks" like injecting false material online, manipulating discourse, and using false flags to "destroy, deny, degrade [and] disrupt" targets.
- Russia is widely accused of financing "troll farms" like the Internet Research Agency to post pro-Russian propaganda under fake identities and interfere in foreign elections (e.g., the 2016 US election) and undermine democratic processes.
- China's "50-cent army" and "Internet Water Army" are believed to be state-directed groups aiming to reinforce favorable opinions of the government and suppress dissent online.
- Ukraine reportedly created an "i-Army" of social media accounts to counter Russian propaganda.
- Bots spreading pro-government narratives have been observed during political events, for example in tweets related to the disappearance of Jamal Khashoggi.
- Reports indicate some governments hire private firms (like Alp Services for the UAE) to conduct manipulation campaigns using fictitious accounts and fabricated content.
Astroturfing: The deceptive practice of presenting an orchestrated activity or campaign as if it were a spontaneous, grassroots movement. In internet manipulation, this often involves using numerous fake accounts or bots to simulate widespread public support or opposition.
Sockpuppet (Internet): An online identity used for purposes of deception. A sockpuppet account is controlled by a person who already operates at least one other account and is used to create a false appearance of support, participate in discussions as a seemingly different person, or manipulate online polls and ratings.
- Political Campaigns and Parties: Political actors frequently employ manipulation tactics, including hiring professionals or using internal teams to spread favorable content, attack opponents, or simulate popular support online.
- Examples: Andrés Sepúlveda claimed to have manipulated elections in Latin America through hacking, social media manipulation (creating "false waves of enthusiasm and derision"), and installing spyware. Parties in India have faced accusations of hiring "political trolls."
- Businesses and Marketing Firms: Internet manipulation is used for competitive advantage, managing reputation, conducting smear campaigns against rivals, and creating artificial hype around products or services. Disinformation can be weaponized in large-scale marketing campaigns.
- Individuals and Groups: This includes professional manipulators for hire, politically motivated hacktivists, and individuals engaging in trolling or other disruptive behaviors, sometimes utilizing automated tools like votebots or clickbots for non-political purposes (e.g., manipulating online polls for pranks, as seen with 4chan and the Time magazine poll).
Internet Manipulation and "The Dead Internet Files"
The phenomenon of internet manipulation, particularly its reliance on automated tools and coordinated campaigns, directly relates to the concerns raised by "The Dead Internet Files." The core idea is that a significant and perhaps growing portion of online activity and content is not genuine human interaction but rather generated, amplified, or suppressed by bots, algorithms, and deliberate manipulative efforts.
This creates an online environment where:
- Prevalence is Engineered: What appears popular or widely supported may simply be the result of automated amplification and fake accounts, not genuine consensus.
- Discourse is Distorted: Conversations can be steered, dissenting voices drowned out, and false narratives given prominence by non-human or directed entities.
- Reality is Simulated: Filter bubbles and targeted messaging create personalized versions of reality for users, crafted not by diverse human interaction but by algorithms and content designed to fit a profile or agenda.
- Authenticity is Difficult to Discern: It becomes increasingly hard to distinguish between content generated by real people expressing genuine opinions and content produced or promoted by automated systems or paid manipulators.
In this context, bots and automated systems haven't necessarily replaced all humans online, but they can silently replace or significantly diminish the perceived impact and visibility of genuine human interaction and organic content. The internet can feel "dead" or artificial because a large portion of the activity a user encounters might be synthetic or manipulated rather than authentic.
Countermeasures and Responses
Addressing internet manipulation is a complex challenge involving technical, educational, and regulatory approaches.
Platform and Technological Measures:
- Platforms can hide metrics like vote counts for a period to mitigate bandwagon effects.
- Implementing systems for users and/or third-party fact-checkers to flag potentially false or misleading content.
- Developing better algorithms and machine learning models to identify and suspend fake accounts, bots, and coordinated inauthentic behavior (a simplified detection heuristic is sketched after this list).
- Using software to detect plagiarism, manipulated media, or track the spread of disinformation.
- Voluntary browser extensions or tools that provide context or link disinformation to debunking information.
- Tim Berners-Lee, inventor of the World Wide Web, has suggested that openness and community governance (like Wikipedia's model) may be more effective at determining truth online than centralized control by a few companies.
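As a rough illustration of the automated-detection idea mentioned above, the sketch below flags accounts using two simple signals often discussed in bot research: abnormally high posting rates and low content diversity. The thresholds and signals are invented for illustration; production systems rely on many more features and trained models:

```python
def looks_automated(post_times, post_texts,
                    max_posts_per_hour=20.0, min_unique_ratio=0.5):
    """Crude heuristic: flag accounts that post unusually fast or keep
    repeating the same text. Thresholds are illustrative assumptions."""
    if len(post_times) < 2:
        return False
    hours = (max(post_times) - min(post_times)) / 3600 or 1e-9  # avoid div-by-zero
    rate = len(post_times) / hours
    unique_ratio = len(set(post_texts)) / len(post_texts)
    return rate > max_posts_per_hour or unique_ratio < min_unique_ratio

# An account posting the same slogan 50 times in ten minutes trips both signals.
times = [i * 12 for i in range(50)]             # one post every 12 seconds
texts = ["Vote YES on the referendum!"] * 50
print(looks_automated(times, texts))            # True
```

Real detection pipelines combine many such signals (network structure, account age, timing entropy) precisely because any single heuristic like this is easy for manipulators to evade.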
Education and Critical Thinking:
- Promoting media literacy education to help users identify manipulative tactics, recognize disinformation, and critically evaluate online sources.
- Encouraging formal logic and analytical thinking training in educational systems.
Government and Regulatory Responses:
- Many countries are proposing or implementing regulations to address aspects of online influence campaigns, fake news, and social media abuse.
- Examples of country-specific actions:
- Germany: Political parties pledged not to use social bots in elections; proposals exist to criminalize using pseudonyms or creating fake accounts on platforms under certain conditions.
- Italy: Communications agency issued guidelines for elections focusing on equal treatment, transparency, forbidden content (like polls), and fact-checking recommendations.
- France: Passed a law requiring platforms, during campaign periods, to disclose the authors and costs of political ads, maintain a legal representative in France, and publish their algorithms once traffic passes a set threshold. The law also allows judges to issue injunctions against "manifest, massive, and disruptive" fake news.
- Malaysia: Passed an Anti-Fake News Act (later repealed) that criminalized publishing wholly or partly false news.
- Kenya: Criminalized publishing false or misleading data with intent to deceive under a Computer and Cybercrimes bill.
Nation-state rules built on compulsory registration or threats of punishment are often judged inadequate against the scale and anonymity of bots and online manipulation; a multi-pronged approach combining technological detection, platform responsibility, user education, and targeted regulation appears necessary.
Conclusion
Internet manipulation, driven significantly by the increasing sophistication and deployment of bots, algorithms, and coordinated inauthentic behavior, is a pervasive force shaping the modern digital landscape. Its effects, ranging from distorted political discourse and simulated public opinion to altered individual perceptions and behaviors, underscore a fundamental challenge to the integrity of online communication.
Within the context of "The Dead Internet Files," internet manipulation serves as a key mechanism by which online activity can feel artificial or detached from genuine human interaction. The ability of automated systems and deliberate campaigns to generate, amplify, and control content on a massive scale raises critical questions about the authenticity of the online world we inhabit and the extent to which our digital experience is increasingly curated and controlled by non-human or directed forces. Understanding these mechanisms is crucial for navigating the complexities of the contemporary internet and fostering a more authentic online environment.